feat: add completed/incompleted flag in memdisk cache, unlock storage_manager #50
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #50 +/- ##
========================================
Coverage 87.71% 87.71%
========================================
Files 20 20
Lines 2539 3077 +538
Branches 2539 3077 +538
========================================
+ Hits 2227 2699 +472
- Misses 181 207 +26
- Partials 131 171 +40

☔ View full report in Codecov by Sentry.
Force-pushed from 0d97e83 to 9c19d66
@@ -10,7 +12,8 @@ use super::ReplacerValue;
 /// objects. The replacer will start evicting if a new object comes that makes
 /// the replacer's size exceeds its max capacity, from the oldest to the newest.
 pub struct LruReplacer<K: ReplacerKey, V: ReplacerValue> {
-    cache_map: LinkedHashMap<K, V>,
+    // usize is pin count
+    cache_map: LinkedHashMap<K, (V, usize)>,
Shall we make `pin_count` a field of `ReplacerValue`, since it'll be used by all replacers?
Sure! But maybe we should open a GitHub issue and do it later, since this PR is already too large :(
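A rough sketch of what the suggestion above could look like (not the repo's actual trait: only `size()` is assumed to exist today, and the pin-count methods are hypothetical additions), so every replacer could keep `LinkedHashMap<K, V>` instead of `(V, usize)` tuples:

```rust
// Hypothetical extension of `ReplacerValue`: the pin count lives with the
// value itself, shared by all replacer implementations.
pub trait ReplacerValue {
    fn size(&self) -> usize;

    /// Current number of readers holding this entry.
    fn pin_count(&self) -> usize;
    fn set_pin_count(&mut self, count: usize);

    /// An entry may only be evicted when no reader is holding it.
    fn is_evictable(&self) -> bool {
        self.pin_count() == 0
    }
}
```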
Maybe implementing something like this would introduce less complexity and make the code easier to write & maintain? https://github.com/kaimast/tokio-condvar/blob/main/src/lib.rs
Force-pushed from d5d1a27 to 7e174ec
Cool! (I never knew a lock could be dropped :( ). Updated, PTAL at the latest commit!
let notified = notify.0.notified();
drop(status_of_keys);
notified.await;
Maybe it's better to have a customized `CondVar` (something like this). But we can keep it as is and refactor it later.
We can add a GitHub issue later to record all the refactoring actions?
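For reference, a minimal, self-contained sketch of the CondVar-style wait being discussed, built on `tokio::sync::Notify` (the `Status`, `StatusMap`, and `wait_for_completion` names are hypothetical, not the PR's code): register as a waiter while still holding the lock, drop the lock, await, and re-check the condition after waking.

```rust
use std::collections::HashMap;
use std::pin::pin;
use std::sync::Arc;
use tokio::sync::{Notify, RwLock};

#[derive(Clone, Copy, PartialEq)]
enum Status {
    Incompleted,
    Completed,
}

type StatusMap = Arc<RwLock<HashMap<String, (Status, Arc<Notify>)>>>;

/// Waits until the writer marks `key` as `Completed` (or the entry vanishes).
async fn wait_for_completion(status_of_keys: &StatusMap, key: &str) {
    loop {
        let guard = status_of_keys.read().await;
        let Some((status, notify)) = guard.get(key) else { return };
        if *status == Status::Completed {
            return;
        }
        // Register as a waiter *before* releasing the lock, so a
        // `notify_waiters()` issued right after the drop is not missed.
        let notify = Arc::clone(notify);
        let mut notified = pin!(notify.notified());
        notified.as_mut().enable();
        drop(guard);
        notified.await;
        // Loop to re-check the status in case of a spurious wake-up.
    }
}
```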
let mut status_of_keys = self.status_of_keys.write().await;
let ((status, size), notify) = status_of_keys.get_mut(&remote_location).unwrap();
*status = Status::Completed;
*size = bytes_mem_written;
{
    let mut mem_replacer = self.mem_replacer.as_ref().unwrap().lock().await;
    mem_replacer.pin(&remote_location, *notify.1.lock().await);
}
notify.0.notify_waiters();
return Ok(bytes_mem_written);
Shall we adopt the same logic to ensure the atomicity here as well?
`let mut status_of_keys = self.status_of_keys.write().await;` has already ensured the atomicity, I guess lol
Also, I want to move `notify.0.notify_waiters();` out of the scope of the `status_of_keys` write lock, so the woken threads can immediately acquire the lock?
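A minimal sketch of that restructuring, using simplified, hypothetical types (the size update and the `mem_replacer.pin` call from the real code are omitted): clone the `Notify` handle out of the map inside the critical section, and call `notify_waiters()` only after the write guard has been dropped.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{Notify, RwLock};

#[derive(Clone, Copy)]
enum Status {
    Incompleted,
    Completed,
}

/// Marks `key` as completed, then wakes the waiters only after the write
/// lock has been released, so the woken tasks can immediately re-acquire it.
async fn mark_completed(
    status_of_keys: &RwLock<HashMap<String, (Status, Arc<Notify>)>>,
    key: &str,
) {
    let notify = {
        let mut guard = status_of_keys.write().await;
        let (status, notify) = guard.get_mut(key).expect("status entry must exist");
        *status = Status::Completed;
        Arc::clone(notify)
    }; // write lock dropped here
    notify.notify_waiters();
}
```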
let ((status, size), notify) = status_of_keys.get_mut(&remote_location).unwrap();
*status = Status::Completed;
*size = data_size;
self.disk_replacer
    .lock()
    .await
    .pin(&remote_location, *notify.1.lock().await);
// FIXME: disk_replacer lock should be released here
notify.0.notify_waiters();
And also here?
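A small sketch of how the FIXME could be addressed (stand-in types, not the PR's code): keep the replacer guard in an inner block so its lock is released before the waiters are woken.

```rust
use std::sync::Arc;
use tokio::sync::{Mutex, Notify};

/// The replacer guard lives in an inner block, so its lock is dropped before
/// `notify_waiters()`. `Vec<String>` is a stand-in for the real disk replacer.
async fn pin_then_notify(
    disk_replacer: Arc<Mutex<Vec<String>>>,
    notify: Arc<Notify>,
    key: String,
) {
    {
        let mut replacer = disk_replacer.lock().await;
        replacer.push(key); // stand-in for `replacer.pin(&remote_location, count)`
    } // guard dropped here; the disk_replacer lock is released
    notify.notify_waiters();
}
```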
debug!("-------- Evicting Key: {:?} --------", key); | ||
evicted_keys.push(key); | ||
self.size -= cache_value.size(); | ||
self.size -= cache_value.0.size(); |
Consider constructing another struct to replace this `.0`.
We can add a GitHub issue later to record all the refactoring actions :(
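For the record, one possible shape of that refactor (the `CacheEntry` name is hypothetical): a named struct instead of the `(V, usize)` tuple, so call sites avoid `.0`.

```rust
/// Hypothetical named entry replacing the `(V, usize)` tuple in `cache_map`.
struct CacheEntry<V> {
    value: V,
    pin_count: usize,
}

impl<V> CacheEntry<V> {
    fn new(value: V) -> Self {
        Self { value, pin_count: 0 }
    }

    fn is_pinned(&self) -> bool {
        self.pin_count > 0
    }
}

// The eviction bookkeeping would then read, e.g.:
//     self.size -= entry.value.size();
//     if !entry.is_pinned() { evicted_keys.push(key); }
```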
The main goal is: during disk write, we don't want to hold any lock.
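A minimal sketch of that goal under simplified, hypothetical types: snapshot what the write needs while holding the lock, release it, and perform the disk I/O with no lock held across the await points.

```rust
use std::sync::Arc;
use tokio::io::AsyncWriteExt;
use tokio::sync::RwLock;

/// Copies the bytes out under the lock, releases it, then performs the slow
/// disk write with no lock held, and reports the number of bytes written.
async fn write_to_disk_unlocked(
    state: Arc<RwLock<Vec<u8>>>,
    path: &str,
) -> std::io::Result<usize> {
    // Snapshot what we need, then drop the guard immediately.
    let data = {
        let guard = state.read().await;
        guard.clone()
    };
    // No lock is held while the file is created and written.
    let mut file = tokio::fs::File::create(path).await?;
    file.write_all(&data).await?;
    file.flush().await?;
    Ok(data.len())
}
```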
thread 'server::tests::test_download_file' panicked at server.rs:217:27: error binding to 127.0.0.1:3030: error creating server listener: Address already in use (os error 48)
Solves #27 and #57; partially solves #25.
TODO1: we should add a GitHub issue for writing data to the network in put_data_to_cache, but this must change the current pin/unpin strategy, since currently the correctness relies on every put being followed by a get.
TODO2: actually, in lru.rs, if the front entry is pinned, we should find the next element to evict. The current implementation only has a serious problem in the disk_cache case, since get_data will fail. But I think this scenario is rare, so I keep the current implementation for simplicity. We still need to add a GitHub issue for it.
TODO3 (important): move the unpin in get_data_from_cache from mod.rs into memory.rs & disk.rs, at the end of the tokio::spawn, but it needs a lot of refactoring, so a GitHub issue is also needed.
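A rough sketch of what TODO3 could look like (stand-in types; the real code would call the replacer's `unpin` instead of the `retain` used here): the spawned task streams the data itself and unpins the entry only after the last chunk has been sent.

```rust
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};

/// Streams the cached chunks to the receiver inside the spawned task and
/// unpins the entry only after the last chunk has been handed off, so the
/// entry cannot be evicted while a reader is still consuming it.
fn stream_then_unpin(
    replacer: Arc<Mutex<Vec<String>>>, // stand-in for the mem/disk replacer
    key: String,
    chunks: Vec<Vec<u8>>,
    tx: mpsc::Sender<Vec<u8>>,
) {
    tokio::spawn(async move {
        for chunk in chunks {
            if tx.send(chunk).await.is_err() {
                break; // receiver dropped; stop early but still unpin below
            }
        }
        // Stand-in for `replacer.unpin(&key)` at the end of the spawned task.
        replacer.lock().await.retain(|k| k != &key);
    });
}
```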